Type your name here.
Implement the iterative-deepening search algorithm as discussed in our lecture notes and as shown in Figures 3.17 and 3.18 in our textbook. Apply it to the 8-puzzle and a second puzzle of your choice.
In this Jupyter notebook, implement the following functions:
iterative_deepening_search(start_state, goal_state, actions_f, take_action_f, max_depth)
depth_limited_search(start_state, goal_state, actions_f, take_action_f, depth_limit)
depth_limited_search is called by iterative_deepening_search with depth_limit values of $0, 1, \ldots,$ max_depth. Both must return either the solution path as a list of states, or one of the strings 'cutoff' or 'failure'. 'failure' signifies that all states were searched and the goal was not found.
Each receives the arguments:
- actions_f: given a state, returns a list of valid actions from that state,
- take_action_f: given a state and an action, returns the new state that results from applying the action to the state,
- depth_limit for depth_limited_search, or max_depth for iterative_deepening_search.

Use your solution to solve the 8-puzzle. Implement the state of the puzzle as a list of integers. 0 represents the empty position.
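The pair of functions above can be sketched as follows, following the recursive formulation from the lecture notes. This is a minimal sketch, not a definitive implementation; variable names are my own, and your version may structure the recursion differently as long as it honors the return protocol ('cutoff', 'failure', or a path of states).

```python
def depth_limited_search(state, goal_state, actions_f, take_action_f, depth_limit):
    """Recursive depth-limited search. Returns a path of states
    (not including the start state), 'cutoff', or 'failure'."""
    if state == goal_state:
        return []
    if depth_limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for action in actions_f(state):
        child = take_action_f(state, action)
        result = depth_limited_search(child, goal_state, actions_f,
                                      take_action_f, depth_limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result != 'failure':
            result.insert(0, child)  # prepend this child to the solution path
            return result
    return 'cutoff' if cutoff_occurred else 'failure'


def iterative_deepening_search(start_state, goal_state, actions_f,
                               take_action_f, max_depth):
    """Try depth limits 0, 1, ..., max_depth; prepend the start state on success."""
    for depth_limit in range(max_depth + 1):
        result = depth_limited_search(start_state, goal_state, actions_f,
                                      take_action_f, depth_limit)
        if result == 'failure':
            return 'failure'
        if result != 'cutoff':
            result.insert(0, start_state)  # depth_limited_search omits the start state
            return result
    return 'cutoff'
```

Note that in this sketch depth_limited_search returns a path that omits the start state, and iterative_deepening_search prepends it, which matches the example output shown later in this notebook.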
Required functions for the 8-puzzle are the following:
- find_blank_8p(state): returns the row and column index for the location of the blank (the 0 value).
- actions_f_8p(state): returns a list of up to four valid actions that can be applied in state. Return them in the order left, right, up, down, though only if each one is a valid action.
- take_action_f_8p(state, action): returns the state that results from applying action in state.
- print_state_8p(state): prints the state as a 3 x 3 table, as shown in lecture notes, or a bit fancier with, for example, '-' and '|' characters to separate tiles. This function is useful to call when debugging your search algorithms.
- print_path_8p(start_state, goal_state, path): prints a solution path in a readable form by calling print_state_8p.

Also, implement a second search problem of your choice. Apply your iterative_deepening_search function to it.
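One way the three non-printing helpers could be sketched, assuming the state is a row-major list of nine integers with 0 as the blank (as specified above). This is only a sketch; the actions here move the blank, which is consistent with the examples below, but other conventions are possible.

```python
def find_blank_8p(state):
    """Return the (row, column) index of the blank (the 0 value)."""
    i = state.index(0)
    return i // 3, i % 3


def actions_f_8p(state):
    """Return the valid actions in state, in the order left, right, up, down."""
    row, col = find_blank_8p(state)
    actions = []
    if col > 0:
        actions.append('left')
    if col < 2:
        actions.append('right')
    if row > 0:
        actions.append('up')
    if row < 2:
        actions.append('down')
    return actions


def take_action_f_8p(state, action):
    """Return a new state with the blank moved in the given direction."""
    row, col = find_blank_8p(state)
    drow, dcol = {'left': (0, -1), 'right': (0, 1),
                  'up': (-1, 0), 'down': (1, 0)}[action]
    i, j = row * 3 + col, (row + drow) * 3 + (col + dcol)
    new_state = state.copy()  # copy so the original state is not modified
    new_state[i], new_state[j] = new_state[j], new_state[i]
    return new_state
```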
Here are some example results.
start_state = [1, 0, 3, 4, 2, 5, 6, 7, 8]
print_state_8p(start_state)
1 - 3
4 2 5
6 7 8
find_blank_8p(start_state)
(0, 1)
actions_f_8p(start_state)
['left', 'right', 'down']
take_action_f_8p(start_state, 'down')
[1, 2, 3, 4, 0, 5, 6, 7, 8]
print_state_8p(take_action_f_8p(start_state, 'down'))
1 2 3
4 - 5
6 7 8
goal_state = take_action_f_8p(start_state, 'down')
new_state = take_action_f_8p(start_state, 'down')
new_state == goal_state
True
start_state
[1, 0, 3, 4, 2, 5, 6, 7, 8]
path = depth_limited_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 3)
path
[[0, 1, 3, 4, 2, 5, 6, 7, 8], [1, 0, 3, 4, 2, 5, 6, 7, 8], [1, 2, 3, 4, 0, 5, 6, 7, 8]]
Notice that the depth_limited_search result is missing the start state; iterative_deepening_search inserts it.
But when we ask iterative_deepening_search to do the same search, it finds a shorter path!
path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 3)
path
[[1, 0, 3, 4, 2, 5, 6, 7, 8], [1, 2, 3, 4, 0, 5, 6, 7, 8]]
Also notice that the successor states are lists, not tuples. This is okay, because the search functions for this assignment do not make use of Python dictionaries.
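Why dictionaries would matter: a list cannot be a dictionary key, because lists are mutable and therefore unhashable, while a tuple of the same values works fine. A quick illustration:

```python
# A list state cannot be a dictionary key because lists are mutable
# and therefore unhashable; a tuple of the same values works fine.
state = [1, 0, 3, 4, 2, 5, 6, 7, 8]

try:
    visited = {state: True}
except TypeError as err:
    print('lists fail as keys:', err)  # unhashable type: 'list'

visited = {tuple(state): True}  # tuples are hashable
print(tuple(state) in visited)  # prints True
```

So a search that tracked visited states in a dictionary or set would need tuple states; the functions in this assignment do not, so lists are fine.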
start_state = [4, 7, 2, 1, 6, 5, 0, 3, 8]
path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 3)
path
'cutoff'
start_state = [4, 7, 2, 1, 6, 5, 0, 3, 8]
path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 5)
path
'cutoff'
Hmm... maybe we can't reach the goal state from this start state. We need a way to randomly generate a valid start state.
import random
random.choice(['left', 'right', 'down', 'up'])
'down'
def random_start_state(goal_state, actions_f, take_action_f, n_steps):
    state = goal_state
    for i in range(n_steps):
        state = take_action_f(state, random.choice(actions_f(state)))
    return state
goal_state = [1, 2, 3, 4, 0, 5, 6, 7, 8]
random_start_state(goal_state, actions_f_8p, take_action_f_8p, 10)
[1, 2, 0, 4, 5, 3, 6, 7, 8]
start_state = random_start_state(goal_state, actions_f_8p, take_action_f_8p, 50)
start_state
[2, 7, 0, 1, 5, 3, 4, 6, 8]
path = iterative_deepening_search(start_state, goal_state, actions_f_8p, take_action_f_8p, 20)
path
[[2, 7, 0, 1, 5, 3, 4, 6, 8], [2, 7, 3, 1, 5, 0, 4, 6, 8], [2, 7, 3, 1, 0, 5, 4, 6, 8], [2, 0, 3, 1, 7, 5, 4, 6, 8], [0, 2, 3, 1, 7, 5, 4, 6, 8], [1, 2, 3, 0, 7, 5, 4, 6, 8], [1, 2, 3, 4, 7, 5, 0, 6, 8], [1, 2, 3, 4, 7, 5, 6, 0, 8], [1, 2, 3, 4, 0, 5, 6, 7, 8]]
Let's print out the state sequence in a readable form.
for p in path:
    print_state_8p(p)
    print()
2 7 -
1 5 3
4 6 8

2 7 3
1 5 -
4 6 8

2 7 3
1 - 5
4 6 8

2 - 3
1 7 5
4 6 8

- 2 3
1 7 5
4 6 8

1 2 3
- 7 5
4 6 8

1 2 3
4 7 5
- 6 8

1 2 3
4 7 5
6 - 8

1 2 3
4 - 5
6 7 8
Here is one way to format the search problem and solution in a readable form.
print_path_8p(start_state, goal_state, path)
Path from
2 7 -
1 5 3
4 6 8
  to
1 2 3
4 - 5
6 7 8
is 9 nodes long:

2 7 -
1 5 3
4 6 8

2 7 3
1 5 -
4 6 8

2 7 3
1 - 5
4 6 8

2 - 3
1 7 5
4 6 8

- 2 3
1 7 5
4 6 8

1 2 3
- 7 5
4 6 8

1 2 3
4 7 5
- 6 8

1 2 3
4 7 5
6 - 8

1 2 3
4 - 5
6 7 8
Download A2grader.tar and extract A2grader.py from it before running the next code cell.
%run -i A2grader.py
=======================  Code Execution  =======================

Extracting python code from notebook named 'Anderson-A2.ipynb' and storing in notebookcode.py
Removing all statements that are not function or class defs or import statements.

Searching this graph:
{'a': ['b', 'z', 'd'], 'b': ['a'], 'e': ['z'], 'd': ['y'], 'y': ['z']}

Looking for path from a to y with max depth of 1.
 5/ 5 points. Your search correctly returned cutoff

Looking for path from a to z with max depth of 5.
10/10 points. Your search correctly returned ['a', 'z']

Testing find_blank_8p([1, 2, 3, 4, 5, 6, 7, 0, 8])
 5/ 5 points. Your find_blank_8p correctly returned 2 1

Testing actions_f_8p([1, 2, 3, 4, 5, 6, 7, 0, 8])
10/10 points. Your actions_f_8p correctly returned ['left', 'right', 'up']

Testing take_action_f_8p([1, 2, 3, 4, 5, 6, 7, 0, 8], up)
10/10 points. Your take_actions_f_8p correctly returned [1, 2, 3, 4, 0, 6, 7, 5, 8]

Testing iterative_deepening_search([1, 2, 3, 4, 5, 6, 7, 0, 8], [0, 2, 3, 1, 4, 6, 7, 5, 8], actions_f_8p, take_action_f_8p, 5)
20/20 points. Your search correctly returned
[1, 2, 3, 4, 5, 6, 7, 0, 8]
[1, 2, 3, 4, 0, 6, 7, 5, 8]
[1, 2, 3, 0, 4, 6, 7, 5, 8]
[0, 2, 3, 1, 4, 6, 7, 5, 8]

Testing iterative_deepening_search([5, 2, 8, 0, 1, 4, 3, 7, 6], [0, 2, 3, 1, 4, 6, 7, 5, 8], actions_f_8p, take_action_f_8p, 10)
10/10 points. Your search correctly returned cutoff.

======================================================================
A2 Execution Grade is 70 / 70
======================================================================

__ / 10 points. At least four sentences describing the solutions found for the 8 puzzle.

__ / 20 points. At least six sentences describing the second search problem, your implementation of state, and the solutions found.

======================================================================
A2 Additional Grade is __ / 30
======================================================================

======================================================================
A2 FINAL GRADE is _ / 100
======================================================================

Extra Credit: Earn one point of extra credit for using your search functions to solve the variation of the grid problem in Assignment 1.

A2 EXTRA CREDIT is 0 / 1
Check in your notebook for Assignment 2 on our Canvas site.
For extra credit, apply your solution to the grid example in Assignment 1 with the addition of at least one horizontal and at least one vertical barrier, all at least three positions long. Demonstrate the solutions found in four different pairs of start and goal states.
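As a sketch of how the extra-credit grid might be set up: the (x, y) tuple state, the 10 x 10 board size, and the barrier coordinates below are all hypothetical placeholders, not the actual Assignment 1 grid, but they show one way to encode barriers so the same search functions apply unchanged.

```python
# Hypothetical 10 x 10 grid with (x, y) tuple states; the barrier
# coordinates below are placeholders, not the actual Assignment 1 grid.
BARRIERS = {(3, 5), (4, 5), (5, 5),   # a horizontal barrier, three cells long
            (7, 1), (7, 2), (7, 3)}   # a vertical barrier, three cells long
SIZE = 10

MOVES = {'left': (-1, 0), 'right': (1, 0), 'up': (0, -1), 'down': (0, 1)}


def actions_f_grid(state):
    """Return the moves from state that stay on the board and avoid barriers."""
    x, y = state
    return [name for name, (dx, dy) in MOVES.items()
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE
            and (x + dx, y + dy) not in BARRIERS]


def take_action_f_grid(state, action):
    """Return the neighboring cell reached by applying action."""
    dx, dy = MOVES[action]
    return (state[0] + dx, state[1] + dy)
```

Because these two functions have the same signatures as actions_f_8p and take_action_f_8p, iterative_deepening_search can be called on them directly with four different start/goal pairs.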